Automated AML for banks--and why explainability matters

#artificialintelligence

If you want to automate as many of your anti-money laundering (AML) processes as possible to end the painful cycle of hiring and firing, or you're aware your current AML controls are neither dynamic nor explainable and it's finally time to do something about it, this blog post is for you. First off, digitally transforming a compliance function can sometimes seem like attempting to service a Boeing 747 engine at 35,000 ft. The reality couldn't be more different. New platform technologies, designed for non-technical business users, enable any bank to implement transformative change in the automation of its AML processes. And getting set up can be achieved in a matter of weeks.


Scaling AI: 3 Reasons Why Explainability Matters

#artificialintelligence

As artificial intelligence and machine learning-based systems become more ubiquitous in decision-making, should we expect our confidence in their outcomes to match the confidence we place in human collaborators? When humans make decisions, we're able to rationalize the outcomes through inquiry and conversation around how expert judgment, experience, and use of available information led to the decision. To borrow the words of former Secretary of Defense Ash Carter, speaking at a 2019 SXSW panel about post-analysis of an AI-enabled decision, "'the machine did it' won't fly." As we evolve human-machine collaboration, establishing trust, transparency, and accountability at the outset of decision-support system and algorithm design is paramount. Without it, people may hesitate to trust AI recommendations because of a lack of transparency into how the machine reached its outcome.